Halting problem

In computability theory, the halting problem can be stated as follows: Given a description of a computer program, decide whether the program finishes running or continues to run forever. This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.

Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. A key part of the proof was a mathematical definition of a computer and program, which became known as a Turing machine; the halting problem is undecidable over Turing machines. It is one of the first examples of a decision problem proved to be undecidable.

Jack Copeland (2004) attributes the actual term halting problem to Martin Davis.[1]

Introduction

The halting problem is a decision problem about properties of computer programs on a fixed Turing-complete model of computation, i.e. all programs that can be written in some given programming language that is general enough to be equivalent to a Turing machine. The problem is to determine, given a program and an input to the program, whether the program will eventually halt when run with that input. In this abstract framework, there are no resource limitations on the amount of memory or time required for the program's execution; it can take arbitrarily long, and use arbitrarily much storage space, before halting. The question is simply whether the given program will ever halt on a particular input.

For example, in pseudocode, the program

while True: continue

does not halt; rather, it goes on forever in an infinite loop. On the other hand, the program

print "Hello World!"

halts very quickly.

A more complex program might be harder to analyze. The program could be run for some fixed time. If the program does not halt, there is in general no way to know if the program will eventually halt or run forever. Turing proved there is no algorithm which can be applied to any arbitrary program and input to decide whether the program stops when run with that input.
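
For illustration, this partial approach can be written down directly. The following Python sketch (the generator-based modelling of a "program", stepping once per yield, is an assumption made only for this example) runs a given program for a fixed number of steps and answers "halts" only when it actually observes termination; when the budget runs out, it can only answer "unknown":

  def run_for(program, max_steps):
      # Simulate `program` (a zero-argument generator function that yields
      # once per simulated step) for at most `max_steps` steps.
      execution = program()
      for _ in range(max_steps):
          try:
              next(execution)        # perform one step of the program
          except StopIteration:      # the program finished within the budget
              return "halts"
      return "unknown"               # budget exhausted; no conclusion possible

  def halts_quickly():
      yield                          # take one step, then halt

  def loops_forever():
      while True:
          yield                      # keep taking steps forever

  print(run_for(halts_quickly, 1000))   # -> halts
  print(run_for(loops_forever, 1000))   # -> unknown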

Importance and consequences

The halting problem is historically important because it was one of the first problems to be proved undecidable. (Turing's proof went to press in May 1936, whereas Alonzo Church's proof of the undecidability of a problem in the lambda calculus had already been published in April 1936.) Subsequently, many other undecidable problems have been described; the typical method of proving a problem to be undecidable is with the technique of reduction. To do this, it is sufficient to show that if a solution to the new problem were found, it could be used to decide an undecidable problem by transforming instances of the undecidable problem into instances of the new problem. Since we already know that no method can decide the old problem, no method can decide the new problem either. Often the halting problem is the undecidable problem that is reduced to the new one in this way.

For example, one consequence of the halting problem's undecidability is that there cannot be a general algorithm that decides whether a given statement about natural numbers is true or not. The reason is that the proposition stating that a certain algorithm will halt given a certain input can be converted into an equivalent statement about natural numbers. If we had an algorithm that could decide the truth of every statement about natural numbers, it could certainly decide this one; but that would determine whether the original program halts, which is impossible, since the halting problem is undecidable.

Rice's theorem generalizes the theorem that the halting problem is unsolvable. It states that any non-trivial property of the partial function that is implemented by a program is undecidable. (A partial function is a function which may not always produce a result, and so is used to model programs, which can either produce results or fail to halt.) For example, the property "halt for the input 0" is undecidable. Note that this theorem holds only for properties of the partial function implemented by the program; Rice's Theorem does not apply to properties of the program itself. For example, "halt on input 0 within 100 steps" is not a property of the partial function that is implemented by the program—it is a property of the program implementing the partial function and is very much decidable.

Gregory Chaitin has defined a halting probability, represented by the symbol Ω, a type of real number that informally is said to represent the probability that a randomly produced program halts. These numbers have the same Turing degree as the halting problem. It is a normal and transcendental number which can be defined but cannot be completely computed. This means one can prove that there is no algorithm which produces the digits of Ω, although its first few digits can be calculated in simple cases.

While Turing's proof shows that there can be no general method or algorithm to determine whether algorithms halt, individual instances of that problem may very well be susceptible to attack. Given a specific algorithm, one can often show that it must halt for any input, and in fact computer scientists often do just that as part of a correctness proof. But each proof has to be developed specifically for the algorithm at hand; there is no mechanical, general way to determine whether algorithms on a Turing machine halt. However, there are some heuristics that can be used in an automated fashion to attempt to construct a proof, which succeed frequently on typical programs. This field of research is known as automated termination analysis.
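
As a small illustration of the kind of argument a termination analysis looks for (the ranking-function reading given in the comments is an explanatory assumption, not a description of any particular tool), the loop below terminates on every input because a nonnegative integer quantity strictly decreases on each iteration:

  def gcd(a, b):
      # Euclid's algorithm. Termination argument: once a and b are nonnegative,
      # the second component of the pair (a, b) strictly decreases on every
      # iteration, because a % b < b whenever b > 0. A well-founded measure
      # that strictly decreases is exactly what termination provers search for.
      a, b = abs(a), abs(b)
      while b != 0:
          a, b = b, a % b
      return a

  print(gcd(252, 105))   # -> 21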

Since the negative answer to the halting problem shows that there are problems that cannot be solved by a Turing machine, the Church–Turing thesis limits what can be accomplished by any machine that implements effective methods. However, not all machines conceivable to human imagination are subject to the Church–Turing thesis (e.g. oracle machines are not). It is an open question whether there can be actual deterministic physical processes that, in the long run, elude simulation by a Turing machine, and in particular whether any such hypothetical process could usefully be harnessed in the form of a calculating machine (a hypercomputer) that could solve the halting problem for a Turing machine amongst other things. It is also an open question whether any such unknown physical processes are involved in the working of the human brain, and whether humans can solve the halting problem.[2]

Representing the halting problem as a set

The conventional representation of decision problems is the set of objects possessing the property in question. The halting set

K := { (i, x) | program i will eventually halt if run with input x}

represents the halting problem.

This set is recursively enumerable, which means there is a computable function that lists all of the pairs (i, x) it contains. However, the complement of this set is not recursively enumerable.
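
A minimal sketch of why K is recursively enumerable, again modelling programs as generator functions that yield once per step (an assumption made only for this illustration): dovetailing runs every program-input pair for ever larger step budgets and emits a pair as soon as it is seen to halt. Pairs whose programs never halt are simply never emitted, which is also why the complement of K cannot be listed this way.

  import itertools

  def enumerate_halting_pairs(programs, inputs):
      # `programs` maps an index i to a one-argument generator function.
      # Dovetail over (program, input, step budget) and yield (i, x) as soon
      # as program i is observed to halt on input x.
      emitted = set()
      for budget in itertools.count(1):
          for i in programs:
              for x in inputs:
                  if (i, x) in emitted:
                      continue
                  execution = programs[i](x)
                  try:
                      for _ in range(budget):
                          next(execution)
                  except StopIteration:   # halted within the current budget
                      emitted.add((i, x))
                      yield (i, x)

  def halts_on_even(x):
      while x % 2 != 0:
          yield                           # loop forever on odd inputs
      yield

  pairs = enumerate_halting_pairs({0: halts_on_even}, [1, 2, 3, 4])
  print(list(itertools.islice(pairs, 2)))   # -> [(0, 2), (0, 4)]; (0, 1) and (0, 3) never appear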

There are many equivalent formulations of the halting problem; any set whose Turing degree equals that of the halting problem is such a formulation. Examples of such sets include:

the set of programs i that eventually halt when run with input 0;
the set of programs i for which there exists an input x such that program i eventually halts when run with input x;
the set of programs i that eventually halt when run with their own description i as input.

Sketch of proof

The proof[3] shows there is no total computable function that decides whether an arbitrary program i halts on arbitrary input x; that is, the following function h is not computable:

h(i,x) =
\begin{cases}
  1 & \text{if program } i \text{ halts on input } x, \\
  0 & \text{otherwise.}
\end{cases}

Here program i refers to the i-th program in an enumeration of all the programs of a fixed Turing-complete model of computation.

  f(i,j)   i=1  i=2  i=3  i=4  i=5  i=6
   j=1      1    0    0    1    0    1
   j=2      0    0    0    1    0    0
   j=3      0    1    0    1    0    1
   j=4      1    0    0    1    0    0
   j=5      0    0    0    1    1    1
   j=6      1    1    0    0    1    0

  f(i,i)    1    0    0    1    1    0
  g(i)      U    0    0    U    U    0

Possible values for a total computable function f arranged in a two-dimensional array; the value of f(i,j) is placed at column i, row j. The diagonal values f(i,i) and the resulting values of g(i) are shown at the bottom; U indicates that the function g is undefined for a particular input value.

The proof proceeds by directly establishing that every total computable function with two arguments differs from the required function h. To this end, given any total computable binary function f, the following partial function g is also computable by some program e:

g(i) =
\begin{cases} 
  0  & \text{if } f(i,i) = 0,\\
  \text{undefined} & \text{otherwise.} 
\end{cases}

The verification that g is computable relies on the following constructs (or their equivalents): computable subprograms (the program that computes f is a subprogram in the program that computes g), duplication of values (the program computes f(i,i) from the single input i), conditional branching, the ability to not produce a defined result (for example, by looping forever), and returning the value 0.

The following pseudocode illustrates a straightforward way to compute g:

  procedure compute_g(i):
      if f(i,i) == 0 then
          return 0
      else
          loop forever

Because g is partial computable, there must be a program e that computes g, by the assumption that the model of computation is Turing-complete. This program is one of the programs on which the halting function h is defined. The next step of the proof shows that h(e,e) will not have the same value as f(e,e).

It follows from the definition of g that exactly one of the following two cases must hold:

f(e,e) = 0, and so g(e) = 0. In this case program e halts on input e, so h(e,e) = 1.
f(e,e) ≠ 0, and so g(e) is undefined. In this case program e does not halt on input e, so h(e,e) = 0.

In either case, f cannot be the same function as h. Because f was an arbitrary total computable function with two arguments, all such functions must differ from h.

This proof is analogous to Cantor's diagonal argument. One may visualize a two-dimensional array with one column and one row for each natural number, as indicated in the table above. The value of f(i,j) is placed at column i, row j. Because f is assumed to be a total computable function, any element of the array can be calculated using f. The construction of the function g can be visualized using the main diagonal of this array. If the array has a 0 at position (i,i), then g(i) is 0. Otherwise, g(i) is undefined. The contradiction comes from the fact that there is some column e of the array corresponding to g itself. If f were the halting function h, there would be a 1 at position (e,e) if and only if g(e) is defined. But g is constructed so that g(e) is defined if and only if there is a 0 in position (e,e).
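
The diagonal construction can also be carried out mechanically on the finite table shown above (only as an illustration; the real argument ranges over all programs and a genuinely total computable f). Given the table for f, the sketch below computes g from the diagonal and reproduces the bottom two rows of the table, with None standing in for "undefined":

  # Rows are indexed by j and columns by i, both starting at 1 in the table
  # above but at 0 here for convenience.
  f_table = [
      [1, 0, 0, 1, 0, 1],
      [0, 0, 0, 1, 0, 0],
      [0, 1, 0, 1, 0, 1],
      [1, 0, 0, 1, 0, 0],
      [0, 0, 0, 1, 1, 1],
      [1, 1, 0, 0, 1, 0],
  ]

  def f(i, j):
      return f_table[j][i]      # the value placed at column i, row j

  def g(i):
      # Diagonal construction: defined (with value 0) exactly when f(i, i) == 0,
      # and undefined (returned here as None) otherwise.
      return 0 if f(i, i) == 0 else None

  for e in range(6):
      print(e + 1, f(e, e), g(e))   # reproduces the rows f(i,i) and g(i) above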

Common pitfalls

The difficulty in the halting problem lies in the requirement that the decision procedure must work for all programs and inputs. A particular program either halts on a given input or does not halt. Consider one algorithm that always answers "halts" and another that always answers "doesn't halt." For any specific program and input, one of these two algorithms answers correctly, even though nobody may know which one.

There are programs (interpreters) that simulate the execution of whatever source code they are given. Such programs can demonstrate that a program does halt if this is the case: the interpreter itself will eventually halt its simulation, which shows that the original program halted. However, an interpreter will not halt if its input program does not halt, so this approach cannot solve the halting problem as stated. It does not successfully answer "doesn't halt" for programs that do not halt.
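
A minimal sketch of such an interpreter-based recognizer, using the same generator-based modelling of programs as in the earlier sketch (an assumption made for illustration): it returns "halts" whenever the simulated program halts, and never returns at all otherwise, which is exactly why it recognizes halting without deciding it.

  def recognize_halting(program):
      # Step through the simulation until it finishes. If the program never
      # halts, neither does this loop, so no answer is ever produced.
      for _ in program():
          pass
      return "halts"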

The halting problem is theoretically decidable for linear bounded automata (LBAs) or deterministic machines with finite memory. A machine with finite memory has a finite number of states, and thus any deterministic program on it must eventually either halt or repeat a previous state:

"...any finite-state machine, if left completely to itself, will fall eventually into a perfectly periodic repetitive pattern. The duration of this repeating pattern cannot exceed the number of internal states of the machine..."(italics in original, Minsky 1967, p. 24)

Minsky warns us, however, that machines such as computers with e.g. a million small parts, each with two states, will have on the order of 2^1,000,000 possible states:

"This is a 1 followed by about three hundred thousand zeroes ... Even if such a machine were to operate at the frequencies of cosmic rays, the aeons of galactic evolution would be as nothing compared to the time of a journey through such a cycle" (Minsky p. 25)

Minsky exhorts the reader to be suspicious—although a machine may be finite, and finite automata "have a number of theoretical limitations":

"...the magnitudes involved should lead one to suspect that theorems and arguments based chiefly on the mere finiteness [of] the state diagram may not carry a great deal of significance" (ibid).

For more on this issue of "intractability" see Busy beaver.

It can also be decided automatically whether a nondeterministic machine with finite memory halts on none of, some of, or all of the possible sequences of nondeterministic decisions, by enumerating states after each possible decision.
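
A minimal sketch of such a decision procedure for a deterministic machine with finite memory, given here as an explicit successor function on a finite set of states (the representation is an assumption for illustration): because the state space is finite, recording the states visited so far decides halting.

  def halts_finite_memory(step, start, halt_states):
      # `step` maps a state to its unique successor, `start` is the initial
      # state, and `halt_states` is the set of halting states. The run must
      # either reach a halting state or revisit a state, after which it
      # repeats the same cycle forever.
      seen = set()
      state = start
      while state not in halt_states:
          if state in seen:
              return False           # a state repeated: the machine never halts
          seen.add(state)
          state = step(state)
      return True                    # a halting state was reached

  # A toy six-state machine with halting state 5.
  transitions = {0: 1, 1: 2, 2: 3, 3: 1, 4: 5, 5: 5}
  print(halts_finite_memory(lambda s: transitions[s], 0, {5}))   # -> False (cycle 1-2-3)
  print(halts_finite_memory(lambda s: transitions[s], 4, {5}))   # -> True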

Formalization of the halting problem

In his original proof Turing formalized the concept of algorithm by introducing Turing machines. However, the result is in no way specific to them; it applies equally to any other model of computation that is equivalent in its computational power to Turing machines, such as Markov algorithms, Lambda calculus, Post systems, register machines, or tag systems.

What is important is that the formalization allows a straightforward mapping of algorithms to some data type that the algorithm can operate upon. For example, if the formalism lets algorithms define functions over strings (such as Turing machines) then there should be a mapping of these algorithms to strings, and if the formalism lets algorithms define functions over natural numbers (such as computable functions) then there should be a mapping of algorithms to natural numbers. The mapping to strings is usually the most straightforward, but strings over an alphabet with n characters can also be mapped to numbers by interpreting them as numbers in an n-ary numeral system.
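
The mapping from strings to numbers can be made explicit. The sketch below (the three-character alphabet is an assumption chosen only for the example) reads a string over an n-character alphabet as a bijective base-n numeral, so that distinct strings, including the empty string, receive distinct numbers:

  ALPHABET = "abc"                   # an illustrative 3-character alphabet
  N = len(ALPHABET)

  def string_to_number(s):
      # Interpret s as a bijective base-N numeral with digits 1..N.
      value = 0
      for ch in s:
          value = value * N + ALPHABET.index(ch) + 1
      return value

  def number_to_string(value):
      # Inverse of string_to_number.
      chars = []
      while value > 0:
          value, digit = divmod(value - 1, N)
          chars.append(ALPHABET[digit])
      return "".join(reversed(chars))

  for s in ["", "a", "c", "ab", "cab"]:
      assert number_to_string(string_to_number(s)) == s
      print(repr(s), "->", string_to_number(s))   # '' -> 0, 'a' -> 1, 'c' -> 3, 'ab' -> 5, 'cab' -> 32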

Relationship with Gödel's incompleteness theorem

The concepts raised by Gödel's incompleteness theorems are very similar to those raised by the halting problem, and the proofs are quite similar. In fact, a weaker form of the First Incompleteness Theorem is an easy consequence of the undecidability of the halting problem. This weaker form differs from the standard statement of the incompleteness theorem by asserting that a complete, consistent and sound axiomatization of all statements about natural numbers is unachievable. The "sound" part is the weakening: it means that we require the axiomatic system in question to prove only true statements about natural numbers (it's very important to observe that the statement of the standard form of Gödel's First Incompleteness Theorem is completely unconcerned with the question of truth, and only concerns formal provability).

The weaker form of the theorem can be proven from the undecidability of the halting problem as follows. Assume that we have a sound (and hence consistent) and complete axiomatization of all true first-order logic statements about natural numbers. Then we can build an algorithm that enumerates all these statements. This means that there is an algorithm N(n) that, given a natural number n, computes a true first-order logic statement about natural numbers, and that for every true statement there is at least one n such that N(n) yields that statement. Now suppose we want to decide if the algorithm with representation a halts on input i. By using Kleene's T predicate, we can express the statement "a halts on input i" as a statement H(a, i) in the language of arithmetic. Since the axiomatization is complete it follows that either there is an n such that N(n) = H(a, i) or there is an n' such that N(n') = ¬ H(a, i). So if we iterate over all n until we either find H(a, i) or its negation, we will always halt. This gives us an algorithm to decide the halting problem. Since we know that there cannot be such an algorithm, it follows that the assumption that there is a consistent and complete axiomatization of all true first-order logic statements about natural numbers must be false.
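
The argument can also be phrased as a hypothetical program. In the sketch below, enumerate_theorems and halting_statement stand for the enumerator N and the arithmetic statement H(a, i) from the paragraph above; all of these names are placeholders invented for the illustration, since no such sound and complete enumerator can actually exist, which is the point of the proof:

  def decide_halting(a, i, enumerate_theorems, halting_statement, negate):
      # If a sound and complete axiomatization existed, enumerate_theorems()
      # would eventually produce either H(a, i) or its negation, so this loop
      # would always terminate, contradicting the undecidability of the
      # halting problem.
      h = halting_statement(a, i)
      not_h = negate(h)
      for theorem in enumerate_theorems():
          if theorem == h:
              return True            # "a halts on input i" is provable, hence true
          if theorem == not_h:
              return False           # its negation is provable, hence true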

Recognizing partial solutions

There are many programs that either return a correct answer to the halting problem or do not return an answer at all. If it were possible to decide whether any given program gives only correct answers, one might hope to collect a large number of such programs and run them in parallel, in the hope of being able to determine whether any programs halt. Curiously, recognizing such partial halting solvers (PHS) is just as hard as the halting problem itself.

Suppose someone claims that program PHSR is a partial halting solver recognizer. Construct a program H:

input a program P
X := "input Q. if Q = P then output 'halts' else loop forever"
run PHSR with X as input
output PHSR's answer

If PHSR recognizes the constructed program X as a partial halting solver, that means that P, the only input for which X produces a result, halts. If PHSR fails to recognize X, then it must be because P does not halt. Therefore H can decide whether an arbitrary program P halts; it solves the halting problem. Since this is impossible, the program PHSR could not have been a partial halting solver recognizer as claimed. Therefore, no program can be a complete partial halting solver recognizer.
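
The same construction can be written as a short sketch, modelling a program as a Python function of one argument and the claimed recognizer as a function phsr (both the modelling and the names are assumptions made for the illustration):

  def construct_X(P):
      # The program X from the argument above: it halts (answering "halts")
      # only on the single input Q == P, and loops forever on every other input.
      def X(Q):
          if Q == P:
              return "halts"
          while True:
              pass
      return X

  def H(P, phsr):
      # If phsr really recognized partial halting solvers, H would decide
      # whether P halts, because X is a correct partial halting solver
      # exactly when P halts.
      return phsr(construct_X(P))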

Another example, HT, of a Turing machine which gives correct answers only for some instances of the halting problem can be described by the requirements that, if HT is started scanning a field which carries the first of a finite string of a consecutive "1"s, followed by one field with symbol "0" (i. e. a blank field), and followed in turn by a finite string of i consecutive "1"s, on an otherwise blank tape, then

While its existence has not been refuted (essentially because there is no Turing machine which halts only if started on a blank tape), such a Turing machine HT would solve the halting problem only partially: for example, it does not necessarily scan the symbol "1" in its final state when the Turing machine represented by a does halt on the starting state and input represented by i, as explicit statements of the halting problem for Turing machines may require.

History of the halting problem

See also

Notes

  1. ^ In none of his work did Turing use the word "halting" or "termination". Turing's biographer Hodges does not have the word "halting" or the words "halting problem" in his index. The earliest known use of the words "halting problem" is in a proof by Davis (Davis 1958, pp. 70–71):
    "Theorem 2.2 There exists a Turing machine whose halting problem is recursively unsolvable.
    "A related problem is the printing problem for a simple Turing machine Z with respect to a symbol Si" (p. 70).
    Davis adds no attribution for his proof, so one infers that it is original with him. But Davis has pointed out that a statement of the proof exists informally in Kleene (1952) on page 382. Copeland (2004) states that:
    "The halting problem was so named (and it appears, first stated) by Martin Davis [cf Copeland footnote 61]... (It is often said that Turing stated and proved the halting theorem in 'On Computable Numbers', but strictly this is not true)." (p. 40)
    For more see the History section above.
  2. ^ B. Jack Copeland, Computation in Luciano Floridi (ed.), The Blackwell guide to the philosophy of computing and information, Wiley-Blackwell, 2004, ISBN 0-631-22919-1, p. 15
  3. ^ This proof can be found, for example, in Penrose (1989). The Emperor's New Mind. Oxford, UK: Oxford University Press. pp. 57–63. ISBN 0-19-851973-7.

References

External links